20.11 Drawing Samples from Autoencoders

  • Some kinds of autoencoders, such as variational autoencoders, explicitly represent a probability distribution and admit straightforward ancestral sampling.
  • Most other kinds of autoencoders require MCMC sampling.

20.11.1 Markov Chain Associated with Any Denoising Autoencoder

Generalized denoising autoencoders are specified by a denoising distribution for sampling an estimate of the clean input given the corrupted input. Each step of the Markov chain that generates from the estimated distribution consists of the following sub-steps (a code sketch follows the list):

  1. Starting from the previous state x, inject corruption noise by sampling x̃ from C(x̃ | x).
  2. Encode x̃ into h = f(x̃).
  3. Decode h to obtain the parameters ω = g(h) of p(x | ω = g(h)) = p(x | x̃).
  4. Sample the next state x from p(x | ω = g(h)) = p(x | x̃).
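A minimal, self-contained sketch of this chain in Python/NumPy. The encoder f, decoder g, Gaussian corruption C, and the Gaussian form of p(x | ω) are illustrative assumptions (random weights standing in for a trained model), not the book's specification:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical "trained" weights (random here, just so the sketch runs)
    W = rng.normal(scale=0.1, size=(5, 5))

    def corrupt(x, sigma=0.5):
        # C(x_tilde | x): additive Gaussian corruption (an assumed choice)
        return x + sigma * rng.normal(size=x.shape)

    def f(x_tilde):
        # encoder: h = f(x_tilde)
        return np.tanh(W @ x_tilde)

    def g(h):
        # decoder: omega = g(h) parameterizes p(x | omega), here a Gaussian mean
        return W.T @ h

    def markov_chain_step(x, recon_sigma=0.1):
        x_tilde = corrupt(x)          # 1. inject corruption noise
        h = f(x_tilde)                # 2. encode
        omega = g(h)                  # 3. decode to parameters omega = g(h)
        # 4. sample the next state from p(x | omega), assumed Gaussian here
        return omega + recon_sigma * rng.normal(size=omega.shape)

    x = rng.normal(size=5)            # arbitrary initial state
    for _ in range(1000):
        # the chain's stationary distribution approximates the
        # distribution the autoencoder was trained on
        x = markov_chain_step(x)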


20.11.2 Clamping the Conditional Sampling

Similar to Boltzmann machines, denoising autoencoders and their generalizations can be used to sample from a conditional distribution p(x_f | x_o), simply by clamping the observed units x_o and resampling the free units x_f given x_o and the sampled latent variables (if any), as in the sketch below.
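A hedged sketch of clamping, reusing markov_chain_step and rng from the code above; the index set and observed values are made up for illustration:

    observed = np.array([0, 1])     # indices of the clamped (observed) units x_o
    x_o = np.array([1.0, -1.0])     # their observed values

    x = rng.normal(size=5)
    x[observed] = x_o               # clamp before running the chain
    for _ in range(1000):
        x = markov_chain_step(x)    # resample the whole state ...
        x[observed] = x_o           # ... then restore x_o, so only x_f moves
    # the unclamped entries of x approximate samples from p(x_f | x_o)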

20.11.3 Walk-Back Training Procedure

The walk-back training procedure was proposed as a way to accelerate the convergence of generative training of denoising autoencoders. It consists of multiple stochastic encoder-decoder steps, initialized at a training example, and penalizes the last probabilistic reconstruction (or all the reconstructions along the way), as sketched below.
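A rough sketch of such a loss, again reusing corrupt, f, g, and rng from the first code block; the squared-error penalty is an assumed stand-in for the negative log-likelihood of reconstructing the clean example, and k is an assumed number of walk-back steps:

    def walk_back_loss(x0, k=3, recon_sigma=0.1):
        # Run k stochastic encode-decode steps starting from the training
        # example x0, penalizing every reconstruction along the way (the
        # variant that penalizes all steps, not only the last one).
        loss, x = 0.0, x0
        for _ in range(k):
            x_tilde = corrupt(x)                # corrupt the current state
            omega = g(f(x_tilde))               # probabilistic reconstruction
            loss += np.sum((omega - x0) ** 2)   # penalize against the clean x0
            # walk further away from x0 before the next reconstruction
            x = omega + recon_sigma * rng.normal(size=omega.shape)
        return loss / k

    print(walk_back_loss(rng.normal(size=5)))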